Automated machine learning (AutoML) algorithms have grown in popularity due to their high performance and their flexibility to adapt to different problems and data sets. With the increasing number of AutoML algorithms, deciding which one best suits a given problem becomes increasingly difficult. It is therefore essential to use complex and challenging benchmarks that can differentiate AutoML algorithms from one another. This paper compares the performance of four AutoML algorithms: the Tree-based Pipeline Optimization Tool (TPOT), Auto-Sklearn, Auto-Sklearn 2, and H2O AutoML. We use the Diverse and Generative ML benchmark (DIGEN), a diverse set of synthetic datasets derived from generative functions designed to highlight the strengths and weaknesses of common machine learning algorithms. We confirm that AutoML can identify pipelines that perform well on all included datasets. Most AutoML algorithms performed similarly, leaving little room for improvement; however, some were more consistent than others at finding high-performing solutions for certain datasets.
When seeking a predictive model in biomedical data, one often has more than a single objective in mind, e.g., attaining both high accuracy and low complexity (to promote interpretability). We investigate here whether multiple objectives can be dynamically tuned by our recently proposed coevolutionary algorithm, SAFE (Solution And Fitness Evolution). We find that, over complex simulated genetics datasets produced by the GAMETES tool, SAFE is able to automatically tune accuracy and complexity with no loss in performance compared with a standard evolutionary algorithm.
We recently proposed SAFE -- Solution And Fitness Evolution -- a commensalistic coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions. We showed that SAFE successfully evolves solutions within a robotic maze domain. In this paper, we present an investigation of SAFE's adaptation and application to multi-objective problems, wherein the candidate objective functions explore different weightings of each objective. Though preliminary, the results suggest that SAFE, and the concept of coevolving solutions and objective functions, can identify a similar set of optimal multi-objective solutions without explicitly employing a Pareto front for fitness computation and parent selection. These findings support our hypothesis that the SAFE algorithmic concept can not only solve complex problems, but can also adapt to the challenge of problems with multiple objectives.
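For context on what SAFE sidesteps, the Pareto-front computation used by conventional multi-objective methods can be sketched in a few lines of Python. This is a generic illustration under the standard dominance definition, not code from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization):
    at least as good on every objective, strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy two-objective case: maximize accuracy, maximize negated complexity.
solutions = [(0.9, -5), (0.8, -2), (0.7, -1), (0.6, -4)]
front = pareto_front(solutions)  # (0.6, -4) is dominated by (0.8, -2)
```

A conventional evolutionary multi-objective algorithm would recompute this front every generation to drive parent selection; SAFE instead lets evolved objective-function weightings play that role.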
Recently, we highlighted a fundamental problem recognized to confound algorithmic optimization: conflating the objective with the objective function. Even when the former is well defined, the latter may not be obvious; for example, in learning a strategy to navigate a maze to find a goal (the objective), an effective objective function to evaluate strategies may not be a simple function of the distance to the goal. We proposed to automate the means by which a good objective function may be discovered -- a proposal realized herein. We present Solution And Fitness Evolution (SAFE), a commensalistic coevolutionary algorithm that maintains two coevolving populations: a population of candidate solutions and a population of candidate objective functions. As proof of principle of this concept, we show that SAFE successfully evolves not only solutions within a robotic maze domain, but also the objective functions needed to measure solution quality during evolution.
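The two-population scheme can be sketched in a toy one-dimensional setting. This is entirely illustrative: the paper's solutions are robot controllers, its objective functions blend distance-to-goal with phenotypic novelty, and every representation and parameter below is an assumption made for the sketch:

```python
import random

random.seed(0)
GOAL = 7.3  # hidden target, standing in for the maze goal

def evolve(pop, score, sigma):
    """One generation: keep the top half by score, refill with mutated copies."""
    survivors = sorted(pop, key=score, reverse=True)[:len(pop) // 2]
    return survivors + [s + random.gauss(0, sigma) for s in survivors]

solutions = [random.uniform(-10, 10) for _ in range(20)]
# Candidate objective functions, reduced here to a single evolved weight that
# blends closeness-to-goal with novelty (distance from the nearest peer).
weights = [random.random() for _ in range(10)]

for _ in range(40):
    pop = list(solutions)

    def novelty(s):
        return min(abs(s - t) for t in pop if t is not s)

    def sol_score(s):
        d = -abs(s - GOAL)  # closeness to the goal
        # SAFE-style scoring: a solution keeps the best value that any
        # candidate objective function assigns it.
        return max(w * d + (1 - w) * novelty(s) for w in weights)

    solutions = evolve(solutions, sol_score, sigma=0.5)
    # Commensalistic coevolution: objective functions evolve by genotypic
    # novelty alone, independent of how well the solutions are doing.
    weights = evolve(weights, lambda w: min(abs(w - v) for v in weights if v is not w), sigma=0.1)
    weights = [min(max(w, 0.0), 1.0) for w in weights]  # keep weights in [0, 1]

best = min(solutions, key=lambda s: abs(s - GOAL))
```

The key structural point the sketch preserves is the commensalism: the solution population benefits from the objective-function population, while the objective functions evolve without feedback from solution quality.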
We modify standard gradient boosting by replacing the embedded weak learner with a strong(er) one, presenting SyRBo: Symbolic Regression Boosting. Experiments over 98 regression datasets show that adding a small number of boosting stages -- between 2 and 5 -- to a symbolic regressor can often attain statistically significant improvements. We note that coding SyRBo on top of any symbolic regressor is straightforward, and the added cost is simply more evolutionary rounds. SyRBo is essentially a simple add-on that can be readily added to an extant symbolic regressor, often with beneficial results.
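The boosting scheme itself is the standard additive one: each stage fits the residuals left by the stages before it. A minimal Python sketch, with an ordinary least-squares line standing in for the symbolic regressor that each stage would actually evolve (the stand-in learner is an assumption of this sketch, not the paper's method):

```python
def fit_line(xs, ys):
    """Least-squares line fit -- a stand-in for the stronger learner
    (a symbolic regressor) that each boosting stage would evolve."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs) or 1.0  # guard against constant xs
    a = cov / var
    b = my - a * mx
    return lambda x: a * x + b

def boost(xs, ys, stages=3):
    """Additive boosting: each stage fits the residuals of the previous ones."""
    models, residuals = [], list(ys)
    for _ in range(stages):
        m = fit_line(xs, residuals)
        models.append(m)
        residuals = [r - m(x) for x, r in zip(xs, residuals)]
    return lambda x: sum(m(x) for m in models)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # exactly linear: y = 2x + 1
model = boost(xs, ys, stages=3)  # model(2.5) -> 6.0
```

Swapping `fit_line` for any symbolic regressor's fit/predict pair reproduces the "simple add-on" structure the abstract describes; only the per-stage training cost changes.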
Photo-identification (photo-id) is one of the main non-invasive capture-recapture methods utilised by marine researchers for monitoring cetacean (dolphin, whale, and porpoise) populations. This method has historically been performed manually, resulting in high workload and cost due to the vast number of images collected. Automated aids have recently been developed to help speed up photo-id, although they are often disjoint in their processing and do not utilise all available identifying information. The work presented in this paper aims to create a fully automatic photo-id aid capable of providing the most likely matches based on all available information, without the need for data pre-processing such as cropping. This is achieved through a pipeline of computer vision models and post-processing techniques that detect cetaceans in unedited field imagery before passing them downstream for individual-level catalogue matching. The system can handle previously uncatalogued individuals and flag them for investigation thanks to catalogue similarity comparison. We evaluate the system against multiple real-life photo-id catalogues, achieving mAP@IoU[0.5] = 0.91 and 0.96 for dorsal fin detection on catalogues from Tanzania and the UK respectively, and 83.1% and 97.5% top-10 accuracy for individual classification on catalogues from the UK and USA.
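The detection figures above are reported at an IoU threshold of 0.5; for reference, intersection-over-union for two axis-aligned boxes can be computed as follows (a generic sketch of the standard metric, not the paper's evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# A predicted fin box counts as a true positive when IoU >= 0.5
# against its ground-truth box.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap 50, union 150 -> 1/3
```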
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector's geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1--100 GeV energy range with the current state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR), compared with current IceCube methods. Alternatively, the GNN decreases the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%--20% on average compared with the current maximum likelihood techniques. When run on a GPU, the GNN is able to process IceCube events at a rate nearly matching the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
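One common way to turn an event's sensor hits into the point-cloud graph a GNN consumes is to connect each hit to its k nearest neighbours. A minimal sketch of that construction (illustrative only; the hit features and edge rule used by the actual IceCube GNN may differ):

```python
import math

def knn_graph(points, k=2):
    """Connect each point to its k nearest neighbours -- one common way to
    turn a point cloud (e.g. sensor hit positions) into an undirected graph."""
    edges = set()
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))  # store undirected edges once
    return sorted(edges)

# Toy "event": real hits carry (x, y, z, time, charge); here just 3-D positions.
hits = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (5, 5, 5)]
print(knn_graph(hits, k=1))  # -> [(0, 1), (0, 2), (1, 3)]
```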
Scaling up language models has been shown to predictably improve performance and sample efficiency across a variety of downstream tasks. This paper instead discusses an unpredictable phenomenon that we refer to as emergent abilities of large language models. We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Emergent abilities thus cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence implies that additional scaling could further expand the range of capabilities of language models.